2,330 research outputs found

    Robot@VirtualHome, an ecosystem of virtual environments and tools for realistic indoor robotic simulation

    Simulations and synthetic datasets have historically empowered research into service robotics-related problems, and are nowadays being revamped through the use of rich virtual environments. However, with their use, special attention must be paid so that the resulting algorithms are not biased by the synthetic data and can generalize to real-world conditions. These aspects are usually compromised when the virtual environments are manually designed. This article presents Robot@VirtualHome, an ecosystem of virtual environments and tools that allows for the management of realistic virtual environments where robotic simulations can be performed. Here “realistic” means that those environments have been designed by mimicking the room layouts and objects appearing in 30 real houses, hence not being influenced by the designer’s knowledge. The provided virtual environments are highly customizable (lighting conditions, textures, object models, etc.), accommodate meta-information about the elements appearing therein (object types, room categories and layouts, etc.), and support the inclusion of virtual service robots and sensors. To illustrate the possibilities of Robot@VirtualHome, we show how it has been used to collect a synthetic dataset, and also exemplify how to exploit it to successfully address two service robotics-related problems: semantic mapping and appearance-based localization.

    This work has been supported by the research projects WISER (DPI2017-84827-R), funded by the Spanish Government and financed by the European Regional Development Fund (FEDER), and ARPEGGIO (PID2020-117057GB-I00), funded by the European H2020 program, by grant number FPU17/04512, and by the UG PhD scholarship program from the University of Groningen. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal used for this research. We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high performance computing cluster.
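    The abstract mentions that each environment ships with meta-information (object types, room categories, layouts). As a minimal sketch of how such meta-information could be consumed, the following Python snippet summarizes a per-house annotation file; the JSON schema, field names, and file name are assumptions for illustration, not Robot@VirtualHome's actual format.

        import json
        from collections import Counter

        def summarize_environment(meta_path: str) -> None:
            # Load the (hypothetical) per-house annotation file.
            with open(meta_path) as f:
                meta = json.load(f)
            # Tally room categories and object types, the kind of
            # meta-information the ecosystem is said to expose.
            room_types = Counter(room["category"] for room in meta["rooms"])
            object_types = Counter(obj["type"] for obj in meta["objects"])
            print("Rooms:", dict(room_types))
            print("Most common objects:", object_types.most_common(5))

        summarize_environment("house_01_meta.json")  # hypothetical file name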

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal is rooted in the concepts of planar homography, to propose regions of interest where to find objects, and recursive Bayesian filtering, to integrate observations over time. The proposal is evaluated in six virtual indoor environments, accounting for the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves the recall and the F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) when compared to a two-stage video object detection method used as a baseline, at the cost of a small time overhead (120 ms) and a slight precision loss (0.92).
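    The two core ideas of the abstract, homography-based propagation of detections and recursive Bayesian fusion of class beliefs, can be sketched in a few lines of Python. This is not the paper's implementation; the homography H, the bounding box, and the nine-class likelihood below are placeholder values.

        import numpy as np

        def propagate_bbox(bbox, H):
            """Warp the corners of (x1, y1, x2, y2) with a 3x3 homography H."""
            x1, y1, x2, y2 = bbox
            corners = np.array([[x1, y1, 1], [x2, y1, 1],
                                [x2, y2, 1], [x1, y2, 1]], dtype=float).T
            warped = H @ corners
            warped /= warped[2]          # back to inhomogeneous coordinates
            xs, ys = warped[0], warped[1]
            # Axis-aligned region of interest in the next frame.
            return xs.min(), ys.min(), xs.max(), ys.max()

        def bayes_update(prior, likelihood):
            """One recursive Bayesian step over class probabilities."""
            posterior = prior * likelihood
            return posterior / posterior.sum()

        # Example: a small camera shift and one observation over 9 classes.
        H = np.array([[1.0, 0.0, 4.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]])
        roi = propagate_bbox((100, 80, 180, 160), H)
        belief = bayes_update(np.full(9, 1 / 9),
                              np.array([.05, .60, .05, .05, .05, .05, .05, .05, .05]))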

    ViMantic, a distributed robotic architecture for semantic mapping in indoor environments

    Semantic maps augment traditional representations of robot workspaces, typically based on their geometry and/or topology, with meta-information about the properties, relations and functionalities of their composing elements. A piece of such information could be: fridges are appliances typically found in kitchens and employed to keep food in good condition. Thereby, semantic maps allow for the execution of high-level robotic tasks in an efficient way, e.g. “Hey robot, store the leftover salad”. This paper presents ViMantic, a novel semantic mapping architecture for the building and maintenance of such maps, which brings together a number of features demanded by modern mobile robotic systems, including: (i) a formal model, based on ontologies, which defines the semantics of the problem at hand and establishes mechanisms for its manipulation; (ii) techniques for processing sensory information and automatically populating maps with, for example, objects detected by cutting-edge CNNs; (iii) distributed execution capabilities through a client-server design, making the knowledge in the maps accessible and extendable to other robots/agents; (iv) a user interface that allows for the visualization of and interaction with relevant parts of the maps through a virtual environment; and (v) public availability, hence being ready to use in robotic platforms. The suitability of ViMantic has been assessed using Robot@Home, a vast repository of data collected by a robot in different houses. The experiments carried out consider different scenarios with one or multiple robots, from which we have obtained satisfactory results regarding automatic population, execution times, and the memory footprint of the resulting semantic maps.

    This work has been supported by the research projects WISER (DPI2017-84827-R), funded by the Spanish Government and financed by the European Regional Development Fund (FEDER), and MoveCare (ICT-26-2016b-GA-732158), funded by the European H2020 program, by a postdoc contract from the I-PPIT program of the University of Málaga, and by the UG PhD scholarship program from the University of Groningen. Funding for open access charge: Universidad de Málaga/CBU
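    As an illustration of feature (ii), populating a map with detected objects, here is a toy semantic map in Python. ViMantic itself relies on an ontology and a client-server design; the class and field names below are invented for this sketch.

        from dataclasses import dataclass, field

        @dataclass
        class SemanticObject:
            category: str                          # e.g. "fridge", from a CNN detector
            position: tuple[float, float, float]   # (x, y, z) in the map frame
            confidence: float                      # detection score
            room_id: str                           # room the object belongs to

        @dataclass
        class SemanticMap:
            rooms: dict[str, str] = field(default_factory=dict)          # id -> category
            objects: list[SemanticObject] = field(default_factory=list)

            def add_detection(self, obj: SemanticObject) -> None:
                self.objects.append(obj)

            def objects_in(self, room_category: str) -> list[SemanticObject]:
                ids = {rid for rid, cat in self.rooms.items() if cat == room_category}
                return [o for o in self.objects if o.room_id in ids]

        m = SemanticMap(rooms={"r1": "kitchen"})
        m.add_detection(SemanticObject("fridge", (1.2, 0.4, 0.0), 0.93, "r1"))
        print([o.category for o in m.objects_in("kitchen")])  # ['fridge']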

    The Fourteenth Data Release of the Sloan Digital Sky Survey: First Spectroscopic Data from the extended Baryon Oscillation Spectroscopic Survey and from the second phase of the Apache Point Observatory Galactic Evolution Experiment

    The fourth generation of the Sloan Digital Sky Survey (SDSS-IV) has been in operation since July 2014. This paper describes the second data release from this phase, and the fourteenth from SDSS overall (making this Data Release Fourteen, or DR14). This release makes public data taken by SDSS-IV in its first two years of operation (July 2014-2016). Like all previous SDSS releases, DR14 is cumulative, including the most recent reductions and calibrations of all data taken by SDSS since the first phase began operations in 2000. New in DR14 is the first public release of data from the extended Baryon Oscillation Spectroscopic Survey (eBOSS); the first data from the second phase of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE-2), including stellar parameter estimates from an innovative data-driven machine learning algorithm known as "The Cannon"; and almost twice as many data cubes from the Mapping Nearby Galaxies at APO (MaNGA) survey as were in the previous release (N = 2812 in total). This paper describes the location and format of the publicly available data from SDSS-IV surveys. We provide references to the important technical papers describing how these data have been taken (both targeting and observation details) and processed for scientific use. The SDSS website (www.sdss.org) has been updated for this release, and provides links to data downloads, as well as tutorials and examples of data use. SDSS-IV is planning to continue to collect astronomical data until 2020, and will be followed by SDSS-V.

    Comment: SDSS-IV collaboration alphabetical author data release paper. DR14 happened on 31st July 2017. 19 pages, 5 figures. Accepted by ApJS on 28th Nov 2017 (this is the "post-print" and "post-proofs" version; minor corrections only from v1, and most of the errors found in proofs corrected).
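    The paper points to www.sdss.org for downloads and tutorials. One hedged way to reach DR14 data programmatically is the community astroquery package (an assumption: astroquery is not mentioned in the abstract, and keyword availability can vary between versions).

        import astropy.units as u
        from astropy.coordinates import SkyCoord
        from astroquery.sdss import SDSS

        # Look for spectroscopic matches near a sky position in DR14.
        pos = SkyCoord(ra=180.0 * u.deg, dec=0.0 * u.deg, frame="icrs")
        xid = SDSS.query_region(pos, radius=5 * u.arcsec,
                                spectro=True, data_release=14)
        if xid is not None:
            # Each entry is an astropy FITS HDUList holding one spectrum.
            spectra = SDSS.get_spectra(matches=xid, data_release=14)
            print(f"Retrieved {len(spectra)} spectra")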